Sparse and silent coding in neural circuits
Authors
Abstract
Sparse coding algorithms seek a linear basis in which signals can be represented by a small number of active (non-zero) coefficients. Such coding has many applications in science and engineering and is believed to play an important role in neural information processing. However, due to the computational complexity of the task, only approximate solutions offer the required time efficiency. Recent results show that, under particular conditions, efficient solutions exist that minimize the magnitude of the coefficients (the ‘l1-norm’) instead of the size of the active subset of features (the ‘l0-norm’). A straightforward neural implementation of these solutions is unlikely, as they require a priori knowledge of the number of active features. Furthermore, these methods rely on iterative re-evaluation of the reconstruction error, which implies that final sparse forms (featuring ‘population sparseness’) can only be reached through a series of non-sparse intermediate representations, in contrast with the overall sparse functioning of neural systems (‘lifetime sparseness’). In this article we present a novel algorithm that integrates our previous ‘l0-norm’ model of spike-based probabilistic optimization for sparse coding with ideas from recent ‘l1-norm’ solutions. The resulting algorithm admits a neurally plausible implementation and does not require an exactly defined sparseness level, making it suitable for representing natural stimuli with a varying number of features. We also demonstrate that the combined method significantly extends the domain in which optimal solutions can be found by ‘l1-norm’-based algorithms.
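As a concrete illustration of the ‘l1-norm’ relaxation the abstract refers to, here is a minimal sketch of iterative soft thresholding (ISTA) for the convex surrogate problem min_a ½‖x − Da‖² + λ‖a‖₁. This is a generic textbook scheme, not the algorithm proposed in the article; the dictionary size, λ, and iteration count are illustrative assumptions:

```python
import numpy as np

def ista(D, x, lam=0.01, step=None, n_iter=2000):
    """Iterative soft thresholding for the l1-relaxed sparse coding
    problem  min_a 0.5*||x - D a||^2 + lam*||a||_1."""
    if step is None:
        # Step size 1/L, where L = ||D||_2^2 is the Lipschitz
        # constant of the gradient of the quadratic term.
        step = 1.0 / np.linalg.norm(D, 2) ** 2
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)              # gradient of 0.5*||x - D a||^2
        a = a - step * grad                    # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)  # soft threshold
    return a

# Toy example: a 3-sparse signal in an overcomplete random dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)                 # unit-norm dictionary atoms
a_true = np.zeros(50)
a_true[[3, 17, 41]] = [1.5, -2.0, 1.0]
x = D @ a_true
a_hat = ista(D, x)
```

Unlike counting non-zero coefficients (‘l0-norm’), the l1 penalty is convex, so a simple gradient-plus-threshold iteration converges; note, however, that every intermediate iterate is generally non-sparse, which is exactly the ‘lifetime sparseness’ objection raised in the abstract.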
Similar resources
Rice Classification and Quality Detection Based on Sparse Coding Technique
Classification of various rice types and determination of their quality is a major issue in the scientific and commercial fields associated with modern agriculture. In recent years, various image processing techniques have been used to identify different types of agricultural products. Various color- and texture-based features are also used to achieve the desired results in this area. In this ...
Towards a unified theory of efficient, predictive and sparse coding
A central goal in theoretical neuroscience is to predict the response properties of sensory neurons from first principles. Several theories have been proposed to this end. “Efficient coding” posits that neural circuits maximise information encoded about their inputs. “Sparse coding” posits that individual neurons respond selectively to specific, rarely occurring features. Finally, “predictive ...
Face Recognition using an Affine Sparse Coding approach
Sparse coding is an unsupervised method that learns a set of over-complete bases to represent data such as images and video. Sparse coding has attracted increasing attention for image classification applications in recent years. But in cases where we have similar images from different classes, such as face recognition applications, different images may be classified into the same class, and hen...
Sparse Coding via Thresholding and Local Competition in Neural Circuits
While evidence indicates that neural systems may be employing sparse approximations to represent sensed stimuli, the mechanisms underlying this ability are not understood. We describe a locally competitive algorithm (LCA) that solves a collection of sparse coding principles by minimizing a weighted combination of mean-squared error and a coefficient cost function. LCAs are designed to be implement...
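The thresholding-and-local-competition scheme this snippet describes can be sketched as simple network dynamics: each unit is driven by its match to the input, inhibits its neighbours in proportion to dictionary overlap, and emits output only above a threshold. This is a hedged reconstruction with a soft-threshold activation; the parameter values and toy data below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def lca(D, x, lam=0.05, tau=10.0, n_steps=500):
    """LCA-style dynamics: internal states u evolve under feedforward
    drive b and lateral inhibition through dictionary overlaps G, while
    outputs a are a thresholded (hence sparse) function of u."""
    b = D.T @ x                             # feedforward drive per unit
    G = D.T @ D - np.eye(D.shape[1])        # lateral inhibition weights
    u = np.zeros(D.shape[1])
    soft = lambda v: np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
    for _ in range(n_steps):
        a = soft(u)                         # only supra-threshold units fire
        u += (b - u - G @ a) / tau          # Euler step of the dynamics
    return soft(u)

# Toy example: recover a 3-sparse code from an overcomplete dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)
a_true = np.zeros(50)
a_true[[3, 17, 41]] = [1.5, -2.0, 1.0]
x = D @ a_true
a_hat = lca(D, x)
```

The appeal of this formulation for neural modelling is that inhibition is local (mediated by pairwise overlaps) and sub-threshold units stay silent throughout, rather than passing through dense intermediate representations.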
The role of zero synapses in unsupervised feature learning
Synapses in real neural circuits can take discrete values, including zero (silent or potential) synapses. The computational role of zero synapses in unsupervised feature learning from unlabeled noisy data is still unclear; it is therefore important to understand how the sparseness of synaptic activity is shaped during learning and its relationship with receptive field formation. Here, we formulate thi...
Journal title:
- Neurocomputing
Volume 79, Issue
Pages -
Publication date: 2012